The performance of automatic speech recognition (ASR) has improved tremendously due to the application of deep neural networks (DNNs). Despite this progress, building a new ASR system remains a challenging task, requiring various resources, multiple training stages and significant expertise. This paper presents our Eesen framework, which drastically simplifies the existing pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen involves learning a single recurrent neural network (RNN) predicting context-independent targets (phonemes or characters). To remove the need for pre-generated frame labels, we adopt the connectionist temporal classification (CTC) objective function to infer the alignments between speech and label sequences. A distinctive feature of Eesen is a generalized decoding approach based on weighted finite-state transducers (WFSTs), which enables the efficient incorporation of lexicons and language models into CTC decoding. Experiments show that compared with the standard hybrid DNN systems, Eesen achieves comparable word error rates (WERs), while at the same time speeding up decoding significantly.
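To make the CTC objective mentioned above concrete, the sketch below implements the standard CTC forward (alpha) recursion in log space: it computes the total probability of a label sequence given per-frame posteriors by summing over all frame-level alignments, which is exactly the quantity CTC training maximizes. This is a minimal illustrative sketch, not Eesen's implementation; the function name and the toy inputs are assumptions for the example.

```python
import numpy as np

def ctc_forward(log_probs, labels, blank=0):
    """Log probability of `labels` under CTC, via the forward recursion.

    log_probs: (T, V) array of per-frame log posteriors over V symbols.
    labels: target label sequence (blanks not included).
    """
    T, V = log_probs.shape
    # Extend the label sequence with blanks: [blank, l1, blank, l2, ..., blank]
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S = len(ext)

    def logsumexp(*xs):
        m = max(xs)
        if m == -np.inf:
            return -np.inf
        return m + np.log(sum(np.exp(x - m) for x in xs))

    alpha = np.full((T, S), -np.inf)
    # Paths may start with the leading blank or the first real label.
    alpha[0, 0] = log_probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = log_probs[0, ext[1]]

    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]                      # stay on the same symbol
            if s > 0:
                a = logsumexp(a, alpha[t - 1, s - 1])  # advance by one
            # Skip a blank, unless that would merge two identical labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = logsumexp(a, alpha[t - 1, s - 2])
            alpha[t, s] = a + log_probs[t, ext[s]]

    # Valid paths end on the final label or the trailing blank.
    return logsumexp(alpha[T - 1, S - 1], alpha[T - 1, S - 2])
```

For example, with two frames, a vocabulary of {blank, 'a'} at uniform probability 0.5, and target ['a'], the three alignments "aa", "a-", and "-a" each contribute 0.25, so the total CTC probability is 0.75.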